Federated learning has attracted increasing attention with the emergence of distributed data. While numerous federated learning algorithms have been proposed for the non-convex distributed problem, federated learning in practice still faces many challenges, such as the large number of training iterations needed to converge as the sizes of models and datasets keep increasing, and the lack of adaptivity in SGD-based model updates. Meanwhile, the study of adaptive methods in federated learning is scarce, and existing works either lack a complete theoretical convergence guarantee or have suboptimal sample complexity. In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on the momentum-based variance-reduction technique in the cross-silo FL setting. We first explore how to design an adaptive algorithm in the FL setting. By providing a counter-example, we show that a naive combination of FL and adaptive methods can lead to divergence. More importantly, we provide a convergence analysis for our method and prove that our algorithm is the first adaptive FL algorithm to reach the best-known sample complexity of $O(\epsilon^{-3})$ and $O(\epsilon^{-2})$ communication rounds for finding an $\epsilon$-stationary point without large batches. Experimental results on a language modeling task and an image classification task with heterogeneous data demonstrate the efficiency of our algorithm.
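The momentum-based variance-reduced estimator that methods of this kind build on (a STORM-style recursion; the toy objective and names below are illustrative, not the paper's exact FAFED update) can be sketched as:

```python
import numpy as np

def vr_momentum_estimator(grad_new, grad_old, v_prev, alpha):
    """STORM-style momentum-based variance-reduced estimator:
    v_t = g(x_t) + (1 - alpha) * (v_{t-1} - g(x_{t-1})),
    where both gradients are evaluated on the same fresh sample."""
    return grad_new + (1.0 - alpha) * (v_prev - grad_old)

# Toy deterministic objective f(x) = 0.5 * ||x||^2, so g(x) = x.
x_prev = np.ones(3)
x = 0.9 * np.ones(3)
v_prev = x_prev.copy()                  # warm-started with a full gradient
v = vr_momentum_estimator(x, x_prev, v_prev, alpha=0.1)
```

In the federated setting, each worker would maintain such an estimator locally and periodically average iterates with the server.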
Time is one of the most important characteristics of time series, yet it has received insufficient attention. Prior time-series forecasting research has mainly focused on mapping a past subseries (the lookback window) to a future series (the forecast window), while the timestamps of the series usually play only an auxiliary role. Because of the point-wise processing within these windows, it is hard to extrapolate patterns to the long-term future. To overcome this barrier, we propose a brand-new time-series forecasting framework named DateFormer, which shifts attention to modeling time rather than following the above practice. Specifically, time series are first split into patches by day to supervise the learning of dynamic date representations with a Date Encoder Representation from Transformers (DERT). These representations are then fed into a simple decoder to produce a coarser (or global) prediction, and are also used to help the model seek valuable information from the lookback window to learn a refined (or local) prediction. DateFormer obtains the final result by summing the above two parts. Our empirical studies on seven benchmarks show that, compared with sequence-modeling methods, time-modeling methods are more effective for long-term series forecasting. DateFormer achieves state-of-the-art accuracy with a 40% relative improvement and broadens the maximum credible forecasting range to a half-year level.
In this paper, we propose a class of faster adaptive gradient descent ascent (GDA) methods for solving nonconvex strongly-concave minimax problems based on unified adaptive matrices, which include almost all existing coordinate-wise and global adaptive learning rates. Specifically, we propose a fast adaptive gradient descent ascent (AdaGDA) method based on a basic momentum technique, which reaches a lower gradient complexity of $O(\kappa^4\epsilon^{-4})$ for finding an $\epsilon$-stationary point without large batches, improving the results of existing adaptive GDA methods by a factor of $O(\sqrt{\kappa})$. At the same time, we propose an accelerated, momentum-based variance-reduced version of AdaGDA (VR-AdaGDA), which achieves a lower gradient complexity of $O(\kappa^{4.5}\epsilon^{-3})$ for finding an $\epsilon$-stationary point without large batches, improving the results of existing adaptive GDA methods by a factor of $O(\epsilon^{-1})$. Moreover, we prove that our VR-AdaGDA method achieves the best-known gradient complexity of $O(\kappa^{3}\epsilon^{-3})$. In particular, we provide an effective convergence analysis framework for our adaptive GDA methods. Some experimental results on policy evaluation and fair classifier tasks demonstrate the efficiency of our algorithms.
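A minimal sketch of the descent/ascent structure with adaptive per-coordinate learning rates (here AdaGrad-style, with no momentum or variance reduction, so it only illustrates the structure and is not the paper's AdaGDA update):

```python
import numpy as np

def adaptive_gda(grad_x, grad_y, x0, y0, lr=0.1, steps=1000, eps=1e-8):
    """Illustrative adaptive GDA: AdaGrad-style per-coordinate rates;
    gradient descent on the min variable x, ascent on the max variable y."""
    x, y = float(x0), float(y0)
    sx = sy = 0.0
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        sx += gx * gx                        # accumulated squared gradients
        sy += gy * gy
        x -= lr * gx / (np.sqrt(sx) + eps)   # descent step on x
        y += lr * gy / (np.sqrt(sy) + eps)   # ascent step on y
    return x, y

# Toy minimax problem f(x, y) = x*y - 0.5*y**2, strongly concave in y,
# with its saddle point at the origin.
gx = lambda x, y: y          # df/dx
gy = lambda x, y: x - y      # df/dy
x_star, y_star = adaptive_gda(gx, gy, x0=1.0, y0=1.0)
```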
Adaptive gradient methods have shown excellent performance in solving many machine learning problems. Although multiple adaptive methods have been studied recently, they mainly focus on either empirical or theoretical aspects, and solve specific problems by using specific adaptive learning rates. It is desirable to design a universal framework of adaptive gradient algorithms with theoretical guarantees for solving general problems. To fill this gap, we propose a faster and universal adaptive gradient framework (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Moreover, our framework can flexibly integrate momentum and variance-reduction techniques. In particular, our novel framework provides convergence analysis support for adaptive gradient methods under the nonconvex setting. In the theoretical analysis, we prove that our SUPER-ADAM algorithm can achieve the best-known complexity of $\tilde{O}(\epsilon^{-3})$ for finding an $\epsilon$-stationary point of nonconvex optimization, which matches the lower bound of stochastic smooth nonconvex optimization. In numerical experiments, we employ various deep learning tasks to validate that our algorithm consistently outperforms existing adaptive algorithms. Code is available at https://github.com/lijunyi95/superadam
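The unifying idea of a "universal adaptive matrix" can be sketched as one generic step $x \leftarrow x - \gamma H^{-1} m$, where the diagonal matrix $H$ is built from gradient statistics; changing how $H$ is built recovers Adam-, AdaGrad-, or AMSGrad-style rules. The constants and the toy objective below are illustrative, not the paper's:

```python
import numpy as np

def generic_adaptive_step(x, m, v, grad, beta1=0.9, beta2=0.999,
                          lr=0.01, rho=1e-3):
    """One generic adaptive step: momentum m plus an adaptive diagonal
    matrix H = diag(sqrt(v) + rho); x <- x - lr * H^{-1} m."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    h = np.sqrt(v) + rho                 # diagonal of the adaptive matrix H
    return x - lr * m / h, m, v

# Minimize the toy objective f(x) = 0.5 * ||x||^2 (gradient is x itself).
x, m, v = np.ones(4), np.zeros(4), np.zeros(4)
for _ in range(3000):
    x, m, v = generic_adaptive_step(x, m, v, x)
```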
In this paper, we study a class of useful minimax problems on Riemannian manifolds and propose a class of effective Riemannian gradient-based methods to solve them. Specifically, we propose an effective Riemannian gradient descent ascent (RGDA) algorithm for deterministic minimax optimization. Moreover, we prove that our RGDA has a sample complexity of $O(\kappa^2\epsilon^{-2})$ for finding an $\epsilon$-stationary solution of Geodesically-Nonconvex Strongly-Concave (GNSC) minimax problems, where $\kappa$ denotes the condition number. At the same time, we present an effective Riemannian stochastic gradient descent ascent (RSGDA) algorithm for stochastic minimax optimization, which has a sample complexity of $O(\kappa^4\epsilon^{-4})$ for finding an $\epsilon$-stationary solution. To further reduce the sample complexity, we propose an accelerated Riemannian stochastic gradient descent ascent (Acc-RSGDA) algorithm based on the momentum-based variance-reduction technique. We prove that our Acc-RSGDA algorithm achieves a lower sample complexity of $\tilde{O}(\kappa^{4}\epsilon^{-3})$ in searching for an $\epsilon$-stationary solution of GNSC minimax problems. Extensive experimental results on robust distributional optimization and robust Deep Neural Networks (DNNs) training over the Stiefel manifold demonstrate the efficiency of our algorithms.
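A Riemannian gradient step differs from a Euclidean one in two places: the gradient is projected onto the tangent space, and the iterate is mapped back to the manifold by a retraction. A minimal sketch on the unit sphere (illustrating only the geometry, not the RGDA minimax update; the eigenvector toy problem is an assumption for demonstration):

```python
import numpy as np

def riemannian_grad_step(x, egrad, lr):
    """One Riemannian gradient step on the unit sphere: project the
    Euclidean gradient onto the tangent space at x, take the step, and
    retract back to the manifold by normalization."""
    rgrad = egrad - np.dot(egrad, x) * x      # tangent-space projection
    y = x - lr * rgrad                        # step against the gradient
    return y / np.linalg.norm(y)              # retraction onto the sphere

# Toy problem: minimize f(x) = -x^T A x on the sphere; the minimizer is
# the leading eigenvector of A.
A = np.diag([3.0, 2.0, 1.0])
x = np.ones(3) / np.sqrt(3.0)
for _ in range(200):
    x = riemannian_grad_step(x, -2.0 * A @ x, lr=0.1)
```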
In this paper, we propose a class of accelerated zeroth-order and first-order momentum methods for both nonconvex mini-optimization and minimax optimization. Specifically, we propose a new accelerated zeroth-order momentum (Acc-ZOM) method for black-box mini-optimization. Moreover, we prove that our Acc-ZOM method achieves a lower query complexity of $\tilde{O}(d^{3/4}\epsilon^{-3})$ for finding an $\epsilon$-stationary point, which improves the best-known result by a factor of $O(d^{1/4})$, where $d$ denotes the variable dimension. In particular, Acc-ZOM does not require the large batches needed by existing zeroth-order stochastic algorithms. Meanwhile, we propose an accelerated \textbf{zeroth-order} momentum descent ascent (Acc-ZOMDA) method for \textbf{black-box} minimax optimization, which obtains a query complexity of $\tilde{O}((d_1+d_2)^{3/4}\kappa_y^{4.5}\epsilon^{-3})$ without large batches for finding an $\epsilon$-stationary point, where $d_1$ and $d_2$ denote the variable dimensions and $\kappa_y$ is the condition number. Moreover, we propose an accelerated \textbf{first-order} momentum descent ascent (Acc-MDA) method for \textbf{white-box} minimax optimization, which has a gradient complexity of $\tilde{O}(\kappa_y^{4.5}\epsilon^{-3})$ without large batches for finding an $\epsilon$-stationary point. In particular, our Acc-MDA can obtain a lower gradient complexity of $\tilde{O}(\kappa_y^{2.5}\epsilon^{-3})$ with a batch size of $O(\kappa_y^4)$. Extensive experimental results on black-box adversarial attacks against deep neural networks (DNNs) and poisoning attacks demonstrate the efficiency of our algorithms.
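Zeroth-order methods such as Acc-ZOM replace analytic gradients with estimates built purely from function evaluations. A standard two-point Gaussian-smoothing estimator is sketched below (illustrative; the paper's estimator, smoothing radius, and step sizes may differ):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point Gaussian-smoothing zeroth-order gradient estimate:
    g = ((f(x + mu*u) - f(x)) / mu) * u with u ~ N(0, I); only function
    evaluations are needed, no analytic gradient."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

f = lambda x: 0.5 * np.sum(x ** 2)   # true gradient is x itself
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 3.0])
# A single estimate is noisy; averaging many estimates suppresses variance.
g = np.mean([zo_gradient(f, x, rng=rng) for _ in range(20000)], axis=0)
```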
Non-line-of-sight (NLOS) imaging aims to reconstruct three-dimensional hidden scenes from data measured in the line of sight, using photon time-of-flight information encoded in light after multiple diffuse reflections. Under-sampled scanning data can facilitate fast imaging; however, the resulting reconstruction problem becomes a severely ill-posed inverse problem whose solution is likely to be degraded by noise and distortions. In this paper, we propose two novel NLOS reconstruction models based on curvature regularization, i.e., the object-domain curvature regularization model and the dual (i.e., signal- and object-)domain curvature regularization model. Fast numerical optimization algorithms are developed based on the alternating direction method of multipliers (ADMM) with a backtracking stepsize rule, and are further accelerated by a GPU implementation. We evaluate the proposed algorithms on both synthetic and real datasets, where they achieve state-of-the-art performance, especially in the compressed-sensing setting. All our code and data are available at https://github.com/Duanlab123/CurvNLOS.
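The ADMM skeleton behind such regularized reconstruction models alternates a data-fidelity update, a proximal regularizer update, and a dual update. The sketch below uses an $\ell_1$ regularizer in place of the curvature terms (and omits the backtracking stepsize rule) to keep the example small; it is not the paper's solver:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_l1(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for 0.5*||Ax - b||^2 + lam*||z||_1 with the split x = z:
    x-update (least squares), z-update (proximal step), dual update."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    P = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-update factor
    Atb = A.T @ b
    for _ in range(iters):
        x = P @ (Atb + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 1.5]
z = admm_l1(A, A @ x_true)     # noiseless measurements of a sparse signal
```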
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs as training data for a state-of-the-art BERT-based QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates, while objects (or subjects) are taken as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves performance on par with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without recourse to external reference data sources.
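The triple-to-question construction can be sketched with simple templates (the templates and the example triple below are illustrative assumptions; the actual PIE-QG pipeline extracts triples from paraphrased passages with OpenIE):

```python
def triples_to_qa(triples):
    """Turn <subject, predicate, object> triples into synthetic QA pairs:
    a question from subject + predicate with the object as the answer,
    plus the symmetric variant with the subject as the answer."""
    pairs = []
    for subj, pred, obj in triples:
        pairs.append((f"{subj} {pred} what?", obj))   # object is the answer
        pairs.append((f"Who {pred} {obj}?", subj))    # subject is the answer
    return pairs

qa = triples_to_qa([("Marie Curie", "discovered", "radium")])
```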
The Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement generated by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch-embedding tokens under a different perturbation. To maximally exploit the Transformer on limited medical data, we propose an auxiliary difficulty-ranking task: the Transformer must identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavors to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of the self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
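The online/target two-branch structure follows the bootstrap (BYOL-style) pattern, in which the target branch is an exponential moving average of the online branch. A minimal sketch of that update (the momentum value and toy parameters below are illustrative, not from the paper):

```python
import numpy as np

def ema_update(target_params, online_params, tau=0.99):
    """Update each target-branch parameter as an exponential moving
    average of the corresponding online-branch parameter."""
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_params, online_params)]

# Toy parameters: the target branch slowly tracks a fixed online branch.
online = [np.ones(2)]
target = [np.zeros(2)]
for _ in range(500):
    target = ema_update(target, online)
```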